SUMMARY: This project aims to construct a predictive model using a TensorFlow convolutional neural network (CNN) and document the end-to-end steps using a template. The Fruits and Vegetables Image Recognition dataset presents a multi-class classification problem, where we attempt to predict one of several (more than two) possible outcomes.
INTRODUCTION: The dataset owner collected over 4,300 images of fruits and vegetables and organized them into a dataset with 36 classes. The idea was to build an application that recognizes food items from a captured photo and suggests recipes that can be made with those items.
ANALYSIS: The EfficientNetV2M model achieved an accuracy score of 95.44% on a separate validation dataset after 20 epochs. After tuning the learning rate, we improved the validation accuracy to 96.87%. When we applied the final model to the test dataset, it achieved an accuracy score of 96.37%.
CONCLUSION: In this iteration, the TensorFlow EfficientNetV2M CNN model appeared suitable for modeling this dataset.
Dataset ML Model: Multi-class classification with image features
Dataset Used: Kritik Seth, "Fruits and Vegetables Image Recognition Dataset," Kaggle 2020
Dataset Reference: https://www.kaggle.com/datasets/kritikseth/fruit-and-vegetable-image-recognition
One source of potential performance benchmarks: https://www.kaggle.com/datasets/kritikseth/fruit-and-vegetable-image-recognition/code
# # Install the packages to support accessing environment variable and SQL databases
# !pip install python-dotenv PyMySQL boto3
# Retrieve CPU information from the system
ncpu = !nproc
print("The number of available CPUs is:", ncpu[0])
The number of available CPUs is: 2
# Retrieve memory configuration information
from psutil import virtual_memory
ram_gb = virtual_memory().total / 1e9
print('Your runtime has {:.1f} gigabytes of available RAM\n'.format(ram_gb))
Your runtime has 13.6 gigabytes of available RAM
# Retrieve GPU configuration information
gpu_info = !nvidia-smi
gpu_info = '\n'.join(gpu_info)
print(gpu_info)
Thu Apr 7 22:25:43 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... Off | 00000000:00:04.0 Off | 0 |
| N/A 34C P0 25W / 300W | 0MiB / 16160MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
# # Mount Google Drive locally for loading the dotenv files
# from dotenv import load_dotenv
# from google.colab import drive
# drive.mount('/content/gdrive')
# gdrivePrefix = '/content/gdrive/My Drive/Colab_Downloads/'
# env_path = '/content/gdrive/My Drive/Colab Notebooks/'
# dotenv_path = env_path + "python_script.env"
# load_dotenv(dotenv_path=dotenv_path)
# Set the random seed number for reproducible results
RNG_SEED = 888
import random
random.seed(RNG_SEED)
import numpy as np
np.random.seed(RNG_SEED)
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import sys
import math
# import boto3
import zipfile
from datetime import datetime
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import tensorflow as tf
tf.random.set_seed(RNG_SEED)
from tensorflow import keras
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Begin the timer for the script processing
START_TIME_SCRIPT = datetime.now()
# Set up the number of CPU cores available for multi-thread processing
N_JOBS = 1
# Set up the flag for sending progress emails (setting to True will send status emails!)
NOTIFY_STATUS = False
# Set the percentage sizes for splitting the dataset
TEST_SET_RATIO = 0.2
VAL_SET_RATIO = 0.2
# Set the number of folds for cross validation
N_FOLDS = 5
N_ITERATIONS = 1
# Set various default modeling parameters
DEFAULT_LOSS = 'categorical_crossentropy'
DEFAULT_METRICS = ['accuracy']
DEFAULT_OPTIMIZER = tf.keras.optimizers.Adam(learning_rate=0.0001)
CLASSIFIER_ACTIVATION = 'softmax'
MAX_EPOCHS = 20
BATCH_SIZE = 16
NUM_CLASSES = 36
# CLASS_LABELS = []
# CLASS_NAMES = []
# RAW_IMAGE_SIZE = (250, 250)
TARGET_IMAGE_SIZE = (224, 224)
INPUT_IMAGE_SHAPE = (TARGET_IMAGE_SIZE[0], TARGET_IMAGE_SIZE[1], 3)
# Define the labels to use for graphing the data
TRAIN_METRIC = "accuracy"
VALIDATION_METRIC = "val_accuracy"
TRAIN_LOSS = "loss"
VALIDATION_LOSS = "val_loss"
# Define the directory locations and file names
STAGING_DIR = 'staging/'
TRAIN_DIR = 'staging/train/'
VALID_DIR = 'staging/validation/'
TEST_DIR = 'staging/test/'
TRAIN_DATASET = 'archive.zip'
# VALID_DATASET = ''
# TEST_DATASET = ''
# TRAIN_LABELS = ''
# VALID_LABELS = ''
# TEST_LABELS = ''
# OUTPUT_DIR = 'staging/'
# SAMPLE_SUBMISSION_CSV = 'sample_submission.csv'
# FINAL_SUBMISSION_CSV = 'submission.csv'
# Check the number of GPUs accessible through TensorFlow
print('Num GPUs Available:', len(tf.config.list_physical_devices('GPU')))
# Print out the TensorFlow version for confirmation
print('TensorFlow version:', tf.__version__)
Num GPUs Available: 1
TensorFlow version: 2.8.0
# Set up the email notification function
def status_notify(msg_text):
    access_key = os.environ.get('SNS_ACCESS_KEY')
    secret_key = os.environ.get('SNS_SECRET_KEY')
    aws_region = os.environ.get('SNS_AWS_REGION')
    topic_arn = os.environ.get('SNS_TOPIC_ARN')
    if (access_key is None) or (secret_key is None) or (aws_region is None) or (topic_arn is None):
        sys.exit("Incomplete notification setup info. Script Processing Aborted!!!")
    sns = boto3.client('sns', aws_access_key_id=access_key, aws_secret_access_key=secret_key, region_name=aws_region)
    response = sns.publish(TopicArn=topic_arn, Message=msg_text)
    if response['ResponseMetadata']['HTTPStatusCode'] != 200:
        print('Status notification not OK with HTTP status code:', response['ResponseMetadata']['HTTPStatusCode'])
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 1 - Prepare Environment has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 1 - Prepare Environment completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 2 - Load and Prepare Images has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
# Clean up the old files and download directories before receiving new ones
!rm -rf staging/
# !rm archive.zip
!mkdir staging/
if not os.path.exists(TRAIN_DATASET):
    !wget https://dainesanalytics.com/datasets/kaggle-kritikseth-fruit-vegetable-image/archive.zip
with zipfile.ZipFile(TRAIN_DATASET, 'r') as zip_ref:
    zip_ref.extractall(STAGING_DIR)
CLASS_LABELS = os.listdir(TRAIN_DIR)
print(CLASS_LABELS)
['corn', 'chilli pepper', 'ginger', 'carrot', 'sweetcorn', 'turnip', 'onion', 'beetroot', 'peas', 'paprika', 'raddish', 'orange', 'cabbage', 'banana', 'jalepeno', 'watermelon', 'tomato', 'lemon', 'pomegranate', 'grapes', 'sweetpotato', 'bell pepper', 'kiwi', 'pineapple', 'cucumber', 'eggplant', 'garlic', 'capsicum', 'spinach', 'cauliflower', 'soy beans', 'potato', 'mango', 'pear', 'lettuce', 'apple']
# Brief listing of training image files for each class
for c_label in CLASS_LABELS:
    training_class_dir = os.path.join(TRAIN_DIR, c_label)
    training_class_files = os.listdir(training_class_dir)
    print('Number of training images for', c_label, ':', len(training_class_files))
    print('Training samples for', c_label, ':', training_class_files[:5], '\n')
Number of training images for corn : 87
Training samples for corn : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for chilli pepper : 87
Training samples for chilli pepper : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_29.jpg', 'Image_73.jpg']
Number of training images for ginger : 68
Training samples for ginger : ['Image_54.jpg', 'Image_73.jpg', 'Image_28.jpg', 'Image_8.jpg', 'Image_6.jpg']
Number of training images for carrot : 82
Training samples for carrot : ['Image_23.jpg', 'Image_54.jpg', 'Image_29.jpg', 'Image_28.jpg', 'Image_94.jpg']
Number of training images for sweetcorn : 91
Training samples for sweetcorn : ['Image_23.jpg', 'Image_97.jpg', 'Image_55.png', 'Image_64.png', 'Image_45.jpg']
Number of training images for turnip : 98
Training samples for turnip : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for onion : 94
Training samples for onion : ['Image_23.jpg', 'Image_97.jpg', 'Image_37.png', 'Image_45.jpg', 'Image_54.jpg']
Number of training images for beetroot : 88
Training samples for beetroot : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_29.jpg', 'Image_28.jpg']
Number of training images for peas : 100
Training samples for peas : ['Image_23.jpg', 'Image_37.png', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for paprika : 83
Training samples for paprika : ['Image_97.jpg', 'Image_45.jpg', 'Image_29.jpg', 'Image_73.jpg', 'Image_28.jpg']
Number of training images for raddish : 81
Training samples for raddish : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_38.png', 'Image_29.jpg']
Number of training images for orange : 69
Training samples for orange : ['Image_23.jpg', 'Image_45.jpg', 'Image_29.jpg', 'Image_73.jpg', 'Image_28.jpg']
Number of training images for cabbage : 92
Training samples for cabbage : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_49.JPG', 'Image_54.jpg']
Number of training images for banana : 75
Training samples for banana : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_73.jpg']
Number of training images for jalepeno : 88
Training samples for jalepeno : ['Image_23.jpg', 'Image_70.png', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for watermelon : 84
Training samples for watermelon : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_29.jpg', 'Image_73.jpg']
Number of training images for tomato : 92
Training samples for tomato : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for lemon : 82
Training samples for lemon : ['Image_23.jpg', 'Image_97.jpg', 'Image_37.png', 'Image_64.png', 'Image_54.jpg']
Number of training images for pomegranate : 79
Training samples for pomegranate : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for grapes : 100
Training samples for grapes : ['Image_23.jpg', 'Image_97.jpg', 'Image_49.JPG', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for sweetpotato : 69
Training samples for sweetpotato : ['Image_23.jpg', 'Image_97.jpg', 'Image_54.jpg', 'Image_29.jpg', 'Image_73.jpg']
Number of training images for bell pepper : 90
Training samples for bell pepper : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for kiwi : 88
Training samples for kiwi : ['Image_23.jpg', 'Image_97.jpg', 'Image_37.png', 'Image_64.png', 'Image_54.jpg']
Number of training images for pineapple : 99
Training samples for pineapple : ['Image_23.jpg', 'Image_97.jpg', 'Image_54.jpg', 'Image_73.jpg', 'Image_28.jpg']
Number of training images for cucumber : 94
Training samples for cucumber : ['Image_23.jpg', 'Image_97.jpg', 'Image_55.png', 'Image_45.jpg', 'Image_13.JPG']
Number of training images for eggplant : 84
Training samples for eggplant : ['Image_97.jpg', 'Image_45.jpg', 'Image_38.png', 'Image_29.jpg', 'Image_73.jpg']
Number of training images for garlic : 92
Training samples for garlic : ['Image_23.jpg', 'Image_97.jpg', 'Image_54.jpg', 'Image_29.jpg', 'Image_28.jpg']
Number of training images for capsicum : 89
Training samples for capsicum : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for spinach : 97
Training samples for spinach : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for cauliflower : 79
Training samples for cauliflower : ['Image_23.jpg', 'Image_97.jpg', 'Image_29.jpg', 'Image_73.jpg', 'Image_34.JPG']
Number of training images for soy beans : 97
Training samples for soy beans : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for potato : 77
Training samples for potato : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_29.jpg']
Number of training images for mango : 86
Training samples for mango : ['Image_97.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_73.jpg', 'Image_28.jpg']
Number of training images for pear : 89
Training samples for pear : ['Image_23.jpg', 'Image_97.jpg', 'Image_45.jpg', 'Image_38.png', 'Image_54.jpg']
Number of training images for lettuce : 97
Training samples for lettuce : ['Image_97.jpg', 'Image_23.jpeg', 'Image_70.png', 'Image_45.jpg', 'Image_54.jpg']
Number of training images for apple : 68
Training samples for apple : ['Image_23.jpg', 'Image_45.jpg', 'Image_54.jpg', 'Image_71.png', 'Image_28.jpg']
# Brief listing of validation image files for each class
for c_label in CLASS_LABELS:
    validation_class_dir = os.path.join(VALID_DIR, c_label)
    validation_class_files = os.listdir(validation_class_dir)
    print('Number of validation images for', c_label, ':', len(validation_class_files))
    print('Validation samples for', c_label, ':')
    print(validation_class_files[:5], '\n')
Number of validation images for corn : 10
Validation samples for corn : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for chilli pepper : 9
Validation samples for chilli pepper : ['Image_8.jpg', 'Image_2.png', 'Image_4.jpg', 'Image_5.png', 'Image_1.jpg']
Number of validation images for ginger : 10
Validation samples for ginger : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for carrot : 9
Validation samples for carrot : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.png', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for sweetcorn : 10
Validation samples for sweetcorn : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for turnip : 10
Validation samples for turnip : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for onion : 10
Validation samples for onion : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_7.png']
Number of validation images for beetroot : 10
Validation samples for beetroot : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for peas : 10
Validation samples for peas : ['Image_8.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg', 'Image_3.jpg']
Number of validation images for paprika : 10
Validation samples for paprika : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for raddish : 9
Validation samples for raddish : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_5.png', 'Image_2.jpg']
Number of validation images for orange : 9
Validation samples for orange : ['Image_6.jpg', 'Image_10.png', 'Image_4.jpg', 'Image_2.jpg', 'Image_3.jpg']
Number of validation images for cabbage : 10
Validation samples for cabbage : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for banana : 9
Validation samples for banana : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for jalepeno : 9
Validation samples for jalepeno : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_3.jpg']
Number of validation images for watermelon : 10
Validation samples for watermelon : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for tomato : 10
Validation samples for tomato : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for lemon : 10
Validation samples for lemon : ['Image_8.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_9.png', 'Image_3.jpg']
Number of validation images for pomegranate : 10
Validation samples for pomegranate : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for grapes : 9
Validation samples for grapes : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for sweetpotato : 10
Validation samples for sweetpotato : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for bell pepper : 9
Validation samples for bell pepper : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for kiwi : 10
Validation samples for kiwi : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for pineapple : 10
Validation samples for pineapple : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for cucumber : 10
Validation samples for cucumber : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for eggplant : 10
Validation samples for eggplant : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for garlic : 10
Validation samples for garlic : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for capsicum : 10
Validation samples for capsicum : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_3.JPG']
Number of validation images for spinach : 10
Validation samples for spinach : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for cauliflower : 10
Validation samples for cauliflower : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for soy beans : 10
Validation samples for soy beans : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for potato : 10
Validation samples for potato : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for mango : 10
Validation samples for mango : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for pear : 10
Validation samples for pear : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for lettuce : 9
Validation samples for lettuce : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of validation images for apple : 10
Validation samples for apple : ['Image_8.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg', 'Image_3.jpg']
# Plot some training images from the dataset
nrows = len(CLASS_LABELS)
ncols = 4
training_examples = []
example_labels = []
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 3)
for c_label in CLASS_LABELS:
    training_class_dir = os.path.join(TRAIN_DIR, c_label)
    training_class_files = os.listdir(training_class_dir)
    for j in range(ncols):
        training_examples.append(training_class_dir + '/' + training_class_files[j])
        example_labels.append(c_label)
# print(training_examples)
# print(example_labels)
for i, img_path in enumerate(training_examples):
    # Set up subplot; subplot indices start at 1
    sp = plt.subplot(nrows, ncols, i+1)
    sp.text(0, 0, example_labels[i])
    # sp.axis('Off')
    img = mpimg.imread(img_path)
    plt.imshow(img)
plt.show()
datagen_kwargs = dict(rescale=1./255)
training_datagen = ImageDataGenerator(**datagen_kwargs)
validation_datagen = ImageDataGenerator(**datagen_kwargs)
dataflow_kwargs = dict(class_mode="categorical")
do_data_augmentation = True
if do_data_augmentation:
    training_datagen = ImageDataGenerator(rotation_range=45,
                                          horizontal_flip=True,
                                          vertical_flip=True,
                                          **datagen_kwargs)
print('Loading and pre-processing the training images...')
training_generator = training_datagen.flow_from_directory(directory=TRAIN_DIR,
                                                          target_size=TARGET_IMAGE_SIZE,
                                                          batch_size=BATCH_SIZE,
                                                          shuffle=True,
                                                          seed=RNG_SEED,
                                                          **dataflow_kwargs)
print('Number of training image batches per epoch of modeling:', len(training_generator))
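The augmentation settings above only rotate and flip the images. A quick, hedged sanity check (not part of the original run) is to pull one batch from the generator and plot a few augmented samples to confirm the pipeline behaves as expected:
# Optional sanity check (not in the original script): preview one batch of
# augmented training images from the generator.
preview_images, preview_labels = next(training_generator)
preview_class_names = list(training_generator.class_indices.keys())
fig = plt.figure(figsize=(16, 4))
for k in range(4):
    sp = plt.subplot(1, 4, k + 1)
    sp.imshow(preview_images[k])  # pixel values already rescaled to [0, 1]
    sp.set_title(preview_class_names[np.argmax(preview_labels[k])])
    sp.axis('off')
plt.show()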
print('Loading and pre-processing the validation images...')
validation_generator = validation_datagen.flow_from_directory(directory=VALID_DIR,
                                                              target_size=TARGET_IMAGE_SIZE,
                                                              batch_size=BATCH_SIZE,
                                                              shuffle=False,
                                                              **dataflow_kwargs)
print('Number of validation image batches per epoch of modeling:', len(validation_generator))
Loading and pre-processing the training images...
Found 3115 images belonging to 36 classes.
Number of training image batches per epoch of modeling: 195
Loading and pre-processing the validation images...
Found 351 images belonging to 36 classes.
Number of validation image batches per epoch of modeling: 22
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 2 - Load and Prepare Images completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 3 - Define and Train Models has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
# Define the function for plotting training results for comparison
def plot_metrics(history):
    fig, axs = plt.subplots(1, 2, figsize=(24, 15))
    metrics = [TRAIN_LOSS, TRAIN_METRIC]
    for n, metric in enumerate(metrics):
        name = metric.replace("_", " ").capitalize()
        plt.subplot(1, 2, n+1)  # index into the 1x2 grid created above
        plt.plot(history.epoch, history.history[metric], color='blue', label='Train')
        plt.plot(history.epoch, history.history['val_'+metric], color='red', linestyle="--", label='Val')
        plt.xlabel('Epoch')
        plt.ylabel(name)
        if metric == TRAIN_LOSS:
            plt.ylim([0, plt.ylim()[1]])
        else:
            plt.ylim([0, 1])
        plt.legend()
# Define the baseline model for benchmarking
def create_nn_model(input_param=INPUT_IMAGE_SHAPE, output_param=NUM_CLASSES, dense_nodes=2048,
                    classifier_activation=CLASSIFIER_ACTIVATION, loss_param=DEFAULT_LOSS,
                    opt_param=DEFAULT_OPTIMIZER, metrics_param=DEFAULT_METRICS):
    base_model = keras.applications.efficientnet_v2.EfficientNetV2M(include_top=False, weights='imagenet', input_shape=input_param)
    nn_model = keras.models.Sequential()
    nn_model.add(base_model)
    nn_model.add(keras.layers.Flatten())
    nn_model.add(keras.layers.Dense(dense_nodes, activation='relu'))
    nn_model.add(keras.layers.Dense(output_param, activation=classifier_activation))
    nn_model.compile(loss=loss_param, optimizer=opt_param, metrics=metrics_param)
    return nn_model
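The baseline fine-tunes the entire network of roughly 181 million parameters. As a hypothetical lighter alternative (not used in this run), the pretrained backbone could be frozen so that only the new classification head trains; a minimal sketch, assuming the same constants defined earlier:
# Hypothetical variant (not used in this run): freeze the EfficientNetV2M
# backbone and train only the classification head.
def create_frozen_nn_model(input_param=INPUT_IMAGE_SHAPE, output_param=NUM_CLASSES, dense_nodes=2048):
    base_model = keras.applications.efficientnet_v2.EfficientNetV2M(include_top=False, weights='imagenet', input_shape=input_param)
    base_model.trainable = False  # keep the ImageNet weights fixed
    nn_model = keras.models.Sequential([
        base_model,
        keras.layers.GlobalAveragePooling2D(),  # lighter head than Flatten + wide Dense
        keras.layers.Dense(dense_nodes, activation='relu'),
        keras.layers.Dense(output_param, activation=CLASSIFIER_ACTIVATION)
    ])
    nn_model.compile(loss=DEFAULT_LOSS, optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), metrics=DEFAULT_METRICS)
    return nn_model
This trades some accuracy for far fewer trainable parameters and faster epochs, which can matter when GPU time is limited.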
# Initialize the neural network model and get the training results for plotting graph
start_time_module = datetime.now()
tf.keras.utils.set_random_seed(RNG_SEED)
baseline_model = create_nn_model()
baseline_model_history = baseline_model.fit(training_generator,
                                            epochs=MAX_EPOCHS,
                                            validation_data=validation_generator,
                                            verbose=1)
print('Total time for model fitting:', (datetime.now() - start_time_module))
Epoch 1/20 10/195 [>.............................] - ETA: 1:54 - loss: 4.7523 - accuracy: 0.0625
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0. warnings.warn(str(msg))
29/195 [===>..........................] - ETA: 2:08 - loss: 3.9730 - accuracy: 0.1220
/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images "Palette images with Transparency expressed in bytes should be "
195/195 [==============================] - 224s 963ms/step - loss: 2.0411 - accuracy: 0.4748 - val_loss: 0.5848 - val_accuracy: 0.8205
Epoch 2/20
195/195 [==============================] - 177s 905ms/step - loss: 0.9410 - accuracy: 0.7204 - val_loss: 0.4088 - val_accuracy: 0.8604
Epoch 3/20
195/195 [==============================] - 176s 905ms/step - loss: 0.6581 - accuracy: 0.7955 - val_loss: 0.4342 - val_accuracy: 0.8490
Epoch 4/20
195/195 [==============================] - 175s 900ms/step - loss: 0.4932 - accuracy: 0.8411 - val_loss: 0.3073 - val_accuracy: 0.8917
Epoch 5/20
195/195 [==============================] - 175s 900ms/step - loss: 0.3780 - accuracy: 0.8745 - val_loss: 0.2397 - val_accuracy: 0.9174
Epoch 6/20
195/195 [==============================] - 175s 899ms/step - loss: 0.3330 - accuracy: 0.8902 - val_loss: 0.1613 - val_accuracy: 0.9373
Epoch 7/20
195/195 [==============================] - 175s 896ms/step - loss: 0.3170 - accuracy: 0.8992 - val_loss: 0.2547 - val_accuracy: 0.9145
Epoch 8/20
195/195 [==============================] - 175s 896ms/step - loss: 0.2654 - accuracy: 0.9197 - val_loss: 0.2596 - val_accuracy: 0.9231
Epoch 9/20
195/195 [==============================] - 175s 892ms/step - loss: 0.2639 - accuracy: 0.9172 - val_loss: 0.4158 - val_accuracy: 0.8832
Epoch 10/20
195/195 [==============================] - 175s 895ms/step - loss: 0.1966 - accuracy: 0.9345 - val_loss: 0.2190 - val_accuracy: 0.9516
Epoch 11/20
195/195 [==============================] - 175s 894ms/step - loss: 0.2359 - accuracy: 0.9223 - val_loss: 0.4565 - val_accuracy: 0.8718
Epoch 12/20
195/195 [==============================] - 175s 900ms/step - loss: 0.1894 - accuracy: 0.9374 - val_loss: 0.1552 - val_accuracy: 0.9601
Epoch 13/20
195/195 [==============================] - 175s 899ms/step - loss: 0.1798 - accuracy: 0.9457 - val_loss: 0.1627 - val_accuracy: 0.9573
Epoch 14/20
195/195 [==============================] - 175s 895ms/step - loss: 0.1523 - accuracy: 0.9499 - val_loss: 0.3422 - val_accuracy: 0.9060
Epoch 15/20
195/195 [==============================] - 175s 896ms/step - loss: 0.1891 - accuracy: 0.9371 - val_loss: 0.2171 - val_accuracy: 0.9373
Epoch 16/20
195/195 [==============================] - 175s 898ms/step - loss: 0.1894 - accuracy: 0.9454 - val_loss: 0.8045 - val_accuracy: 0.7892
Epoch 17/20
195/195 [==============================] - 175s 899ms/step - loss: 0.1853 - accuracy: 0.9448 - val_loss: 0.1607 - val_accuracy: 0.9601
Epoch 18/20
195/195 [==============================] - 175s 899ms/step - loss: 0.1337 - accuracy: 0.9573 - val_loss: 0.2121 - val_accuracy: 0.9459
Epoch 19/20
195/195 [==============================] - 176s 905ms/step - loss: 0.1533 - accuracy: 0.9522 - val_loss: 0.5818 - val_accuracy: 0.8575
Epoch 20/20
195/195 [==============================] - 177s 908ms/step - loss: 0.1629 - accuracy: 0.9509 - val_loss: 0.1561 - val_accuracy: 0.9544
Total time for model fitting: 0:59:53.230115
baseline_model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
efficientnetv2-m (Functional)  (None, 7, 7, 1280)      53150388
flatten (Flatten) (None, 62720) 0
dense (Dense) (None, 2048) 128452608
dense_1 (Dense) (None, 36) 73764
=================================================================
Total params: 181,676,760
Trainable params: 181,384,728
Non-trainable params: 292,032
_________________________________________________________________
plot_metrics(baseline_model_history)
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 3 - Define and Train Models completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 4 - Tune and Optimize Models has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
# Initialize the neural network model and get the training results for plotting graph
start_time_module = datetime.now()
learning_rate_reduction = ReduceLROnPlateau(monitor='val_accuracy', patience=3, verbose=1, factor=0.5, min_lr=0.0000125)
TUNE_OPTIMIZER = tf.keras.optimizers.Adam(learning_rate=0.00005)
tf.keras.utils.set_random_seed(RNG_SEED)
tune_model = create_nn_model(opt_param=TUNE_OPTIMIZER)
tune_model_history = tune_model.fit(training_generator,
                                    epochs=MAX_EPOCHS,
                                    validation_data=validation_generator,
                                    callbacks=[learning_rate_reduction],
                                    verbose=1)
print('Total time for model fitting:', (datetime.now() - start_time_module))
/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images "Palette images with Transparency expressed in bytes should be "
Epoch 1/20 36/195 [====>.........................] - ETA: 1:52 - loss: 3.5576 - accuracy: 0.1471
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0. warnings.warn(str(msg))
195/195 [==============================] - 207s 930ms/step - loss: 2.1614 - accuracy: 0.4353 - val_loss: 0.7873 - val_accuracy: 0.7749 - lr: 5.0000e-05
Epoch 2/20
195/195 [==============================] - 178s 912ms/step - loss: 0.9650 - accuracy: 0.7181 - val_loss: 0.3970 - val_accuracy: 0.8803 - lr: 5.0000e-05
Epoch 3/20
195/195 [==============================] - 178s 913ms/step - loss: 0.6366 - accuracy: 0.8064 - val_loss: 0.3185 - val_accuracy: 0.8974 - lr: 5.0000e-05
Epoch 4/20
195/195 [==============================] - 176s 903ms/step - loss: 0.4847 - accuracy: 0.8488 - val_loss: 0.2343 - val_accuracy: 0.9231 - lr: 5.0000e-05
Epoch 5/20
195/195 [==============================] - 176s 903ms/step - loss: 0.3954 - accuracy: 0.8729 - val_loss: 0.2029 - val_accuracy: 0.9373 - lr: 5.0000e-05
Epoch 6/20
195/195 [==============================] - 177s 903ms/step - loss: 0.3417 - accuracy: 0.8918 - val_loss: 0.1590 - val_accuracy: 0.9516 - lr: 5.0000e-05
Epoch 7/20
195/195 [==============================] - 177s 905ms/step - loss: 0.2771 - accuracy: 0.9079 - val_loss: 0.1656 - val_accuracy: 0.9516 - lr: 5.0000e-05
Epoch 8/20
195/195 [==============================] - 176s 905ms/step - loss: 0.2396 - accuracy: 0.9207 - val_loss: 0.1848 - val_accuracy: 0.9573 - lr: 5.0000e-05
Epoch 9/20
195/195 [==============================] - 176s 905ms/step - loss: 0.2070 - accuracy: 0.9310 - val_loss: 0.1707 - val_accuracy: 0.9430 - lr: 5.0000e-05
Epoch 10/20
195/195 [==============================] - 177s 907ms/step - loss: 0.1848 - accuracy: 0.9374 - val_loss: 0.1523 - val_accuracy: 0.9601 - lr: 5.0000e-05
Epoch 11/20
195/195 [==============================] - 178s 913ms/step - loss: 0.1674 - accuracy: 0.9454 - val_loss: 0.1865 - val_accuracy: 0.9601 - lr: 5.0000e-05
Epoch 12/20
195/195 [==============================] - 178s 913ms/step - loss: 0.1810 - accuracy: 0.9435 - val_loss: 0.1382 - val_accuracy: 0.9544 - lr: 5.0000e-05
Epoch 13/20
195/195 [==============================] - 178s 909ms/step - loss: 0.1560 - accuracy: 0.9477 - val_loss: 0.1582 - val_accuracy: 0.9630 - lr: 5.0000e-05
Epoch 14/20
195/195 [==============================] - 178s 912ms/step - loss: 0.1380 - accuracy: 0.9544 - val_loss: 0.1419 - val_accuracy: 0.9630 - lr: 5.0000e-05
Epoch 15/20
195/195 [==============================] - 178s 913ms/step - loss: 0.1042 - accuracy: 0.9650 - val_loss: 0.1213 - val_accuracy: 0.9658 - lr: 5.0000e-05
Epoch 16/20
195/195 [==============================] - 179s 917ms/step - loss: 0.1286 - accuracy: 0.9615 - val_loss: 0.1364 - val_accuracy: 0.9573 - lr: 5.0000e-05
Epoch 17/20
195/195 [==============================] - 178s 915ms/step - loss: 0.1526 - accuracy: 0.9515 - val_loss: 0.1640 - val_accuracy: 0.9544 - lr: 5.0000e-05
Epoch 18/20
195/195 [==============================] - ETA: 0s - loss: 0.1191 - accuracy: 0.9634
Epoch 18: ReduceLROnPlateau reducing learning rate to 2.499999936844688e-05.
195/195 [==============================] - 178s 914ms/step - loss: 0.1191 - accuracy: 0.9634 - val_loss: 0.1823 - val_accuracy: 0.9573 - lr: 5.0000e-05
Epoch 19/20
195/195 [==============================] - 179s 913ms/step - loss: 0.0944 - accuracy: 0.9682 - val_loss: 0.1533 - val_accuracy: 0.9630 - lr: 2.5000e-05
Epoch 20/20
195/195 [==============================] - 181s 928ms/step - loss: 0.0518 - accuracy: 0.9836 - val_loss: 0.1481 - val_accuracy: 0.9687 - lr: 2.5000e-05
Total time for model fitting: 0:59:51.709243
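The ReduceLROnPlateau callback halves the learning rate after three epochs without a val_accuracy improvement, which triggered at epoch 18 above. A hedged sketch of an EarlyStopping callback, an option not exercised in this run, that would also halt training and restore the best weights once validation accuracy stalls:
# Hypothetical addition (not used in this run): stop training once val_accuracy
# stops improving and roll the model back to its best epoch.
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy',
                                                  patience=5,
                                                  restore_best_weights=True,
                                                  verbose=1)
# It would be passed alongside the existing callback:
# tune_model.fit(..., callbacks=[learning_rate_reduction, early_stopping])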
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 4 - Tune and Optimize Models completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 5 - Finalize Model and Make Predictions has begun on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
FINAL_OPTIMIZER = tf.keras.optimizers.Adam(learning_rate=0.000025)
FINAL_EPOCHS = MAX_EPOCHS
tf.keras.utils.set_random_seed(RNG_SEED)
final_model = create_nn_model(opt_param=FINAL_OPTIMIZER)
final_model.fit(training_generator, epochs=FINAL_EPOCHS, verbose=1)
final_model.summary()
Epoch 1/20 19/195 [=>............................] - ETA: 2:33 - loss: 3.7887 - accuracy: 0.0870
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0. warnings.warn(str(msg))
87/195 [============>.................] - ETA: 1:30 - loss: 3.0272 - accuracy: 0.2358
/usr/local/lib/python3.7/dist-packages/PIL/Image.py:960: UserWarning: Palette images with Transparency expressed in bytes should be converted to RGBA images "Palette images with Transparency expressed in bytes should be "
195/195 [==============================] - 187s 823ms/step - loss: 2.3192 - accuracy: 0.3907
Epoch 2/20
195/195 [==============================] - 159s 817ms/step - loss: 1.1044 - accuracy: 0.6796
Epoch 3/20
195/195 [==============================] - 160s 818ms/step - loss: 0.7778 - accuracy: 0.7624
Epoch 4/20
195/195 [==============================] - 161s 824ms/step - loss: 0.5985 - accuracy: 0.8154
Epoch 5/20
195/195 [==============================] - 162s 828ms/step - loss: 0.4677 - accuracy: 0.8517
Epoch 6/20
195/195 [==============================] - 161s 825ms/step - loss: 0.3989 - accuracy: 0.8620
Epoch 7/20
195/195 [==============================] - 159s 816ms/step - loss: 0.3060 - accuracy: 0.9047
Epoch 8/20
195/195 [==============================] - 159s 818ms/step - loss: 0.2937 - accuracy: 0.9040
Epoch 9/20
195/195 [==============================] - 159s 817ms/step - loss: 0.2332 - accuracy: 0.9246
Epoch 10/20
195/195 [==============================] - 159s 815ms/step - loss: 0.2074 - accuracy: 0.9374
Epoch 11/20
195/195 [==============================] - 160s 818ms/step - loss: 0.1822 - accuracy: 0.9403
Epoch 12/20
195/195 [==============================] - 160s 818ms/step - loss: 0.1817 - accuracy: 0.9400
Epoch 13/20
195/195 [==============================] - 160s 822ms/step - loss: 0.1281 - accuracy: 0.9579
Epoch 14/20
195/195 [==============================] - 160s 820ms/step - loss: 0.1332 - accuracy: 0.9528
Epoch 15/20
195/195 [==============================] - 159s 816ms/step - loss: 0.1391 - accuracy: 0.9560
Epoch 16/20
195/195 [==============================] - 159s 815ms/step - loss: 0.1320 - accuracy: 0.9576
Epoch 17/20
195/195 [==============================] - 159s 815ms/step - loss: 0.1161 - accuracy: 0.9621
Epoch 18/20
195/195 [==============================] - 159s 816ms/step - loss: 0.1005 - accuracy: 0.9669
Epoch 19/20
195/195 [==============================] - 159s 816ms/step - loss: 0.1184 - accuracy: 0.9644
Epoch 20/20
195/195 [==============================] - 159s 817ms/step - loss: 0.0884 - accuracy: 0.9740
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
efficientnetv2-m (Functional)  (None, 7, 7, 1280)      53150388
flatten_2 (Flatten) (None, 62720) 0
dense_4 (Dense) (None, 2048) 128452608
dense_5 (Dense) (None, 36) 73764
=================================================================
Total params: 181,676,760
Trainable params: 181,384,728
Non-trainable params: 292,032
_________________________________________________________________
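Before scoring the hold-out test set, the trained network could be persisted for the recipe application described in the introduction. A minimal sketch using the standard Keras SavedModel API; this step and the path are assumptions, not part of the original script:
# Hypothetical step (not in the original script): save the trained model.
final_model.save('saved_model/fruit_veg_efficientnetv2m')
# The application could later reload it with:
# reloaded_model = tf.keras.models.load_model('saved_model/fruit_veg_efficientnetv2m')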
# Brief listing of test image files for each class
for c_label in CLASS_LABELS:
    test_class_dir = os.path.join(TEST_DIR, c_label)
    test_class_files = os.listdir(test_class_dir)
    print('Number of test images for', c_label, ':', len(test_class_files))
    print('Test samples for', c_label, ':')
    print(test_class_files[:5], '\n')
Number of test images for corn : 10
Test samples for corn : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for chilli pepper : 10
Test samples for chilli pepper : ['Image_8.jpg', 'Image_6.jpeg', 'Image_2.png', 'Image_4.jpg', 'Image_5.png']
Number of test images for ginger : 10
Test samples for ginger : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for carrot : 10
Test samples for carrot : ['Image_8.jpg', 'Image_9.jpeg', 'Image_6.jpg', 'Image_4.png', 'Image_2.jpg']
Number of test images for sweetcorn : 10
Test samples for sweetcorn : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for turnip : 10
Test samples for turnip : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for onion : 10
Test samples for onion : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_7.png']
Number of test images for beetroot : 10
Test samples for beetroot : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for peas : 10
Test samples for peas : ['Image_8.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg', 'Image_3.jpg']
Number of test images for paprika : 10
Test samples for paprika : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for raddish : 10
Test samples for raddish : ['Image_8.jpg', 'Image_9.jpeg', 'Image_6.jpg', 'Image_4.jpg', 'Image_5.png']
Number of test images for orange : 10
Test samples for orange : ['Image_6.jpg', 'Image_10.png', 'Image_4.jpg', 'Image_2.jpg', 'Image_8.jpeg']
Number of test images for cabbage : 10
Test samples for cabbage : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for banana : 9
Test samples for banana : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for jalepeno : 10
Test samples for jalepeno : ['Image_1.jpeg', 'Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg']
Number of test images for watermelon : 10
Test samples for watermelon : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for tomato : 10
Test samples for tomato : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for lemon : 10
Test samples for lemon : ['Image_8.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_9.png', 'Image_3.jpg']
Number of test images for pomegranate : 10
Test samples for pomegranate : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for grapes : 10
Test samples for grapes : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_5.jpeg']
Number of test images for sweetpotato : 10
Test samples for sweetpotato : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for bell pepper : 10
Test samples for bell pepper : ['Image_8.jpg', 'Image_3.jpeg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg']
Number of test images for kiwi : 10
Test samples for kiwi : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for pineapple : 10
Test samples for pineapple : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for cucumber : 10
Test samples for cucumber : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for eggplant : 10
Test samples for eggplant : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for garlic : 10
Test samples for garlic : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for capsicum : 10
Test samples for capsicum : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_3.JPG']
Number of test images for spinach : 10
Test samples for spinach : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for cauliflower : 10
Test samples for cauliflower : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for soy beans : 10
Test samples for soy beans : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for potato : 10
Test samples for potato : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for mango : 10
Test samples for mango : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for pear : 10
Test samples for pear : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg']
Number of test images for lettuce : 10
Test samples for lettuce : ['Image_8.jpg', 'Image_6.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_5.jpeg']
Number of test images for apple : 10
Test samples for apple : ['Image_8.jpg', 'Image_4.jpg', 'Image_2.jpg', 'Image_1.jpg', 'Image_3.jpg']
datagen_kwargs = dict(rescale=1./255)
test_datagen = ImageDataGenerator(**datagen_kwargs)
dataflow_kwargs = dict(class_mode="categorical")
print('Loading and pre-processing the test images...')
test_generator = test_datagen.flow_from_directory(directory=TEST_DIR,
                                                  target_size=TARGET_IMAGE_SIZE,
                                                  batch_size=BATCH_SIZE,
                                                  shuffle=False,
                                                  **dataflow_kwargs)
print('Number of test image batches per epoch of modeling:', len(test_generator))
Loading and pre-processing the test images...
Found 359 images belonging to 36 classes.
Number of test image batches per epoch of modeling: 23
# Print the labels used for the modeling
print(test_generator.class_indices)
{'apple': 0, 'banana': 1, 'beetroot': 2, 'bell pepper': 3, 'cabbage': 4, 'capsicum': 5, 'carrot': 6, 'cauliflower': 7, 'chilli pepper': 8, 'corn': 9, 'cucumber': 10, 'eggplant': 11, 'garlic': 12, 'ginger': 13, 'grapes': 14, 'jalepeno': 15, 'kiwi': 16, 'lemon': 17, 'lettuce': 18, 'mango': 19, 'onion': 20, 'orange': 21, 'paprika': 22, 'pear': 23, 'peas': 24, 'pineapple': 25, 'pomegranate': 26, 'potato': 27, 'raddish': 28, 'soy beans': 29, 'spinach': 30, 'sweetcorn': 31, 'sweetpotato': 32, 'tomato': 33, 'turnip': 34, 'watermelon': 35}
final_model.evaluate(test_generator, verbose=1)
13/23 [===============>..............] - ETA: 8s - loss: 0.1667 - accuracy: 0.9567
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0. warnings.warn(str(msg))
23/23 [==============================] - 24s 855ms/step - loss: 0.1414 - accuracy: 0.9638
[0.14140363037586212, 0.9637883305549622]
test_pred = final_model.predict(test_generator)
test_predictions = np.argmax(test_pred, axis=-1)
test_original = test_generator.labels
print('Accuracy Score:', accuracy_score(test_original, test_predictions))
print(confusion_matrix(test_original, test_predictions))
print(classification_report(test_original, test_predictions))
/usr/local/lib/python3.7/dist-packages/PIL/TiffImagePlugin.py:788: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0. warnings.warn(str(msg))
Accuracy Score: 0.9637883008356546
[[ 8 0 0 ... 0 0 0]
[ 0 7 0 ... 0 0 0]
[ 0 0 10 ... 0 0 0]
...
[ 0 0 0 ... 10 0 0]
[ 0 0 0 ... 0 10 0]
[ 0 0 0 ... 0 0 10]]
precision recall f1-score support
0 1.00 0.80 0.89 10
1 1.00 0.78 0.88 9
2 1.00 1.00 1.00 10
3 0.83 1.00 0.91 10
4 1.00 1.00 1.00 10
5 0.89 0.80 0.84 10
6 1.00 1.00 1.00 10
7 1.00 1.00 1.00 10
8 0.91 1.00 0.95 10
9 0.88 0.70 0.78 10
10 1.00 1.00 1.00 10
11 1.00 1.00 1.00 10
12 1.00 1.00 1.00 10
13 1.00 1.00 1.00 10
14 1.00 1.00 1.00 10
15 1.00 1.00 1.00 10
16 1.00 1.00 1.00 10
17 1.00 1.00 1.00 10
18 1.00 1.00 1.00 10
19 0.91 1.00 0.95 10
20 1.00 1.00 1.00 10
21 1.00 1.00 1.00 10
22 1.00 1.00 1.00 10
23 0.91 1.00 0.95 10
24 1.00 1.00 1.00 10
25 1.00 1.00 1.00 10
26 1.00 1.00 1.00 10
27 0.89 0.80 0.84 10
28 0.83 1.00 0.91 10
29 1.00 1.00 1.00 10
30 1.00 1.00 1.00 10
31 0.75 0.90 0.82 10
32 1.00 0.90 0.95 10
33 1.00 1.00 1.00 10
34 1.00 1.00 1.00 10
35 1.00 1.00 1.00 10
accuracy 0.96 359
macro avg 0.97 0.96 0.96 359
weighted avg 0.97 0.96 0.96 359
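The report above indexes the classes numerically, which obscures that the two weakest rows are class 9 (corn, f1-score 0.78) and class 31 (sweetcorn, f1-score 0.82). A short sketch (not part of the original run) that regenerates the report with human-readable labels via the generator's class_indices mapping:
# Rebuild the classification report with class names instead of indices.
index_to_label = {v: k for k, v in test_generator.class_indices.items()}
target_names = [index_to_label[i] for i in range(NUM_CLASSES)]
print(classification_report(test_original, test_predictions, target_names=target_names))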
if NOTIFY_STATUS: status_notify('(TensorFlow Multi-Class) Task 5 - Finalize Model and Make Predictions completed on ' + datetime.now().strftime('%A %B %d, %Y %I:%M:%S %p'))
print('Total time for the script:', (datetime.now() - START_TIME_SCRIPT))
Total time for the script: 2:58:20.505721
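As a closing illustration of the application scenario from the introduction, a hedged single-image inference sketch; the image path is only an example, and the preprocessing deliberately mirrors the generators (resize to TARGET_IMAGE_SIZE, rescale by 1/255):
# Hypothetical inference example (not part of the original script).
from tensorflow.keras.preprocessing import image as keras_image
img = keras_image.load_img('staging/test/banana/Image_1.jpg', target_size=TARGET_IMAGE_SIZE)
img_batch = np.expand_dims(keras_image.img_to_array(img) / 255.0, axis=0)  # match generator rescaling
probabilities = final_model.predict(img_batch)[0]
index_to_label = {v: k for k, v in test_generator.class_indices.items()}
print('Predicted class:', index_to_label[int(np.argmax(probabilities))],
      'with probability {:.2%}'.format(float(probabilities.max())))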